Ensure MultiSyncDataCollectors returns data ordered by worker id
#3243
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/rl/3243
Note: Links to docs will display an error until the docs builds have been completed.
❌ 10 New Failures, 1 Cancelled Job, 10 Unrelated Failures as of commit c428fd4 with merge base 8570c25:
NEW FAILURES - The following jobs have failed.
CANCELLED JOB - The following job was cancelled. Please retry.
BROKEN TRUNK - The following jobs failed but were present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
LCarmi
left a comment
I left a few comments on some open points I still have.
Moreover, should this guarantee be specified anywhere in the documentation of MultiSyncDataCollector.__init__ or _MultiDataCollector.__init__?
- buffers = {}
- for worker_idx, buffer in self.buffers.items():
+ buffers = [None] * self.num_workers
+ for idx, buffer in enumerate(filter(None.__ne__, self.buffers)):
We need to filter out buffers that did not return their experience: I use the enumerate(filter(None.__ne__, self.buffers)) idiom to make this compact and hopefully readable; I'm open to better ideas.
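For reference, a minimal sketch of how that idiom behaves, using hypothetical buffer contents rather than the actual collector code: enumerate numbers the filtered sequence, so the yielded index restarts from 0 and no longer lines up with the original worker ids once a None entry is skipped.

# Hypothetical buffers list indexed by worker id; worker 1 returned nothing.
buffers = ["data_w0", None, "data_w2"]

# enumerate(filter(...)) numbers the filtered sequence, not the original slots:
# it yields (0, "data_w0") and (1, "data_w2"), even though "data_w2" sits at worker id 2.
for idx, buffer in enumerate(filter(None.__ne__, buffers)):
    print(idx, buffer)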
I think it's ok
but you define idx, which was already defined earlier (LoC 3829 or 3840). It should be worker_idx, I believe.
See my comment below
# Skip frame counting if this worker didn't send data this iteration
# (happens when reusing buffers or on first iteration with some workers)
if idx not in buffers:
    continue
I am puzzled by this code, and I don't see where it could happen that idx is defined but the related buffer is None.
Equivalent code here would be: if buffers[idx] is None: continue
This happens during preemption: if we say that we're ok with 80% of the data, it could be that we don't have data for one of the workers, and we just return whatever we have at this stage.
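A minimal sketch of that preemption scenario, with hypothetical buffer contents (the actual collector stores different data): the slot of a worker that returned nothing stays None, and frame counting is skipped for it.

# Hypothetical worker-indexed buffers after a preempted iteration:
# worker 1 delivered nothing, so its slot stays None (a "bubble").
buffers = [{"frames": 64}, None, {"frames": 64}]

frames_this_iter = 0
for worker_idx, buffer in enumerate(buffers):
    # Skip frame counting for workers that did not send data this iteration.
    if buffer is None:
        continue
    frames_this_iter += buffer["frames"]

print(frames_this_iter)  # 128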
vmoens
left a comment
I think these changes make sense, but there seems to be a bug in the idx vs worker_idx naming in the preemption case.
Other than that lgtm!
Description
Changes the self.buffers data structure in MultiSyncDataCollectors to a list, in order to ensure that the returned batch respects the worker_id order.
Motivation and Context
The main motivation behind this change is to enable id-based retrieval of experience when sampling from multiple, possibly different, environments/policies. In more detail, if the user specifies a list of environments/policies, this change guarantees that batch[i] corresponds to the experience sampled from the i-th environment and policy.
This was not previously available because self.buffers was a dict, and different processes may return their experience out of order due to unpredictable latency. Since a dict is iterated in insertion order, the relative order between worker ids was not maintained.
Note: if preemption is enabled, then the results will just maintain relative order, since we are injecting "bubbles" in our returned batch.
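As an illustration of the ordering issue, here is a small, self-contained sketch with hypothetical worker results (the actual collector code differs): a dict keyed by worker id preserves insertion order, i.e. completion order, whereas a pre-sized list indexed by worker id always places worker i's data at position i.

# Workers finish in an unpredictable order, e.g. worker 2 reports first.
arrival_order = [(2, "data_w2"), (0, "data_w0"), (1, "data_w1")]

# Dict: iteration follows insertion (completion) order.
buffers_dict = {}
for worker_idx, data in arrival_order:
    buffers_dict[worker_idx] = data
print(list(buffers_dict.values()))  # ['data_w2', 'data_w0', 'data_w1']

# List indexed by worker id: batch[i] always comes from worker i.
num_workers = 3
buffers_list = [None] * num_workers
for worker_idx, data in arrival_order:
    buffers_list[worker_idx] = data
print(buffers_list)  # ['data_w0', 'data_w1', 'data_w2']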
Types of changes
What types of changes does your code introduce? Remove all that do not apply:
Checklist
Go over all the following points, and put an x in all the boxes that apply. If you are unsure about any of these, don't hesitate to ask. We are here to help!